Calibrated asymmetric surrogate losses
Authors
Abstract
Similar resources
On Structured Prediction Theory with Calibrated Convex Surrogate Losses
We provide novel theoretical insights on structured prediction in the context of efficient convex surrogate loss minimization with consistency guarantees. For any task loss, we construct a convex surrogate that can be optimized via stochastic gradient descent and we prove tight bounds on the so-called “calibration function” relating the excess surrogate risk to the actual risk. In contrast to p...
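The abstract above describes minimizing a convex surrogate of a task loss by stochastic gradient descent. A minimal sketch of that general recipe (illustrative only, not the construction from the paper: here the task loss is the 0-1 loss, the surrogate is the hinge loss, and the data and step sizes are made up):

```python
import numpy as np

# Minimize a convex surrogate (hinge loss) of the 0-1 task loss by SGD.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.sign(X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=200))

w = np.zeros(2)
for t in range(1, 2001):
    i = rng.integers(200)
    margin = y[i] * (X[i] @ w)
    if margin < 1:                        # subgradient step on the hinge loss
        w += (1.0 / np.sqrt(t)) * y[i] * X[i]

# Driving down the surrogate risk also drives down the actual (0-1) risk;
# a calibration function quantifies exactly this relationship.
train_error = np.mean(np.sign(X @ w) != y)
assert train_error < 0.2
```

The point of a calibration bound is that the guarantee on `train_error` follows from the guarantee on the surrogate objective, without optimizing the non-convex 0-1 loss directly.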
Calibrated Surrogate Losses for Classification with Label-Dependent Costs
We present surrogate regret bounds for arbitrary surrogate losses in the context of binary classification with label-dependent costs. Such bounds relate a classifier’s risk, assessed with respect to a surrogate loss, to its cost-sensitive classification risk. Two approaches to surrogate regret bounds are developed. The first is a direct generalization of Bartlett et al. [2006], who focus on mar...
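A small numeric sketch of the cost-sensitive setting described above (hedged: the weighting scheme and all names are illustrative, not the construction from the cited paper): each example's surrogate loss is scaled by the cost of misclassifying its true label, so the weighted surrogate risk upper-bounds the cost-sensitive classification risk.

```python
import numpy as np

def cost_sensitive_risk(scores, labels, c_fp=1.0, c_fn=2.0):
    """Empirical cost-sensitive 0-1 risk with label-dependent error costs."""
    preds = np.sign(scores)
    fp = np.mean((preds == 1) & (labels == -1))   # false-positive rate
    fn = np.mean((preds == -1) & (labels == 1))   # false-negative rate
    return c_fp * fp + c_fn * fn

def weighted_hinge(scores, labels, c_fp=1.0, c_fn=2.0):
    """Cost-weighted hinge surrogate: scale each hinge term by the cost
    attached to that example's true label."""
    w = np.where(labels == 1, c_fn, c_fp)
    return np.mean(w * np.maximum(0.0, 1.0 - labels * scores))

# Since the hinge loss upper-bounds the 0-1 indicator pointwise, the
# weighted surrogate risk upper-bounds the cost-sensitive risk.
rng = np.random.default_rng(0)
labels = rng.choice([-1, 1], size=100)
scores = labels * rng.normal(1.0, 1.0, size=100)
assert cost_sensitive_risk(scores, labels) <= weighted_hinge(scores, labels)
```

A surrogate regret bound, as in the abstract above, goes further than this pointwise upper bound: it relates the *excess* surrogate risk over its minimum to the excess cost-sensitive risk.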
Consistency of Surrogate Risk Minimization Methods for Binary Classification using Classification Calibrated Losses
In the previous lecture, we saw that for a λ-strongly proper composite loss ψ, it is possible to bound the 0-1 regret in terms of the ψ-regret. Hence, for a λ-strongly proper composite loss ψ, if we have a ψ-consistent algorithm, we can use it to obtain a 0-1 consistent algorithm. However, not all loss functions used as surrogates in binary classification are proper, the hinge loss being one...
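The regret bound mentioned above can be checked numerically for a concrete proper composite loss. For the logistic (log) loss, the conditional surrogate regret of predicting probability p when the true conditional probability is η is KL(η‖p), and Pinsker's inequality gives the calibration bound reg₀₋₁ ≤ √(2·reg_ψ). A hedged sketch (the function names are illustrative; the bound itself is the standard Pinsker-based one for log loss with natural logarithms):

```python
import numpy as np

def logistic_conditional_regret(eta, p):
    """Conditional regret of log loss = KL(eta || p), natural log."""
    def risk(e, q):  # conditional log-loss risk when P(Y=1|x)=e, predicting q
        return -e * np.log(q) - (1 - e) * np.log(1 - q)
    return risk(eta, p) - risk(eta, eta)

# When the prediction sits on the wrong side of 1/2, the 0-1 regret is
# |2*eta - 1|, while the smallest achievable surrogate regret is
# KL(eta || 1/2).  Check reg_{0-1} <= sqrt(2 * reg_psi) on a grid.
for eta in np.linspace(0.01, 0.99, 99):
    reg01 = abs(2 * eta - 1)
    reg_psi = logistic_conditional_regret(eta, 0.5)
    assert reg01 <= np.sqrt(2 * reg_psi) + 1e-12
```

The bound is tight near η = 1/2, which is why calibration analyses focus on behavior at the decision boundary. The hinge loss, being non-proper, needs a separate argument, which is what the abstract above goes on to discuss.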
متن کامل"On the (Non-)existence of Convex, Calibrated Surrogate Losses for Ranking"
We study surrogate losses for learning to rank, in a framework where the rankings are induced by scores and the task is to learn the scoring function. We focus on the calibration of surrogate losses with respect to a ranking evaluation metric, where the calibration is equivalent to the guarantee that near-optimal values of the surrogate risk imply near-optimal values of the risk defined by the ...
Surrogate Losses in Passive and Active Learning
Active learning is a type of sequential design for supervised machine learning, in which the learning algorithm sequentially requests the labels of selected instances from a large pool of unlabeled data points. The objective is to produce a classifier of relatively low risk, as measured under the 0-1 loss, ideally using fewer label requests than the number of random labeled data points sufficie...
Journal
Journal title: Electronic Journal of Statistics
Year: 2012
ISSN: 1935-7524
DOI: 10.1214/12-ejs699